transparent display
Realtime Dynamic Gaze Target Tracking and Depth-Level Estimation
Seraj, Esmaeil, Bhate, Harsh, Talamonti, Walter
Transparent Displays (TDs) are cutting-edge visual technologies that allow users to see digital content superimposed over physical environments, with applications ranging from dynamic Head-Up Displays (HUDs) in vehicles [1, 2, 3] to augmented reality glasses [4, 5, 6] and smart windows in commercial buildings [7]. Their ability to blend digital information with the real world enables significant advances in fields such as navigation, interactive advertising, robotics [8, 9, 10, 11], and immersive user interfaces and feedback [12, 13, 14, 15, 16]. Imagine a transparent display, such as a dynamic HUD in a vehicle, that not only shows essential metrics like speed, fuel levels, and engine status but also overlays navigational cues directly onto the road ahead, highlighting paths, directions, pedestrians, and other vehicles [2, 1, 17]. Beyond practical utilities, such dynamic HUDs could enhance the journey by identifying points of interest, e.g., service stations, or even serve as platforms for entertainment and work-related activities. However, realizing this vision introduces significant challenges, particularly in tracking the user's gaze across an ever-changing array of widgets and information layers projected onto the transparent display. Moreover, accurate estimation of gaze depth levels is crucial, especially because of the display's transparency and the potential for the human gaze to interact with or pass through specific widgets, necessitating a system that can precisely discern the focus of a user's attention between virtual overlays and real-world objects to enhance both interactivity and safety [18]. The dynamic nature of this problem, coupled with the need for real-time processing, creates a complex problem space for effectively identifying and monitoring what the user is focusing on at any given moment.
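The depth-level question the abstract raises, whether the gaze rests on a widget on the glass or passes through it to the road beyond, is commonly approached via binocular vergence: the two eyes' gaze rays converge at the fixated depth. A minimal sketch follows, assuming a hypothetical binocular eye tracker that reports per-eye ray origins and directions in display coordinates; the function names, the 0.8 m display depth, and the tolerance are illustrative assumptions, not the authors' method:

```python
import numpy as np

def vergence_depth(o_l, d_l, o_r, d_r):
    """Estimate the 3D fixation point as the midpoint of the shortest
    segment between the two (possibly skew) gaze rays."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:            # near-parallel rays: gaze at "infinity"
        return None
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    return 0.5 * ((o_l + t_l * d_l) + (o_r + t_r * d_r))

def depth_level(fixation, display_depth=0.8, tolerance=0.15):
    """Classify whether gaze rests on the transparent display plane
    or passes through to the real-world scene (depths in metres)."""
    if fixation is None:
        return "scene"
    return "display" if abs(fixation[2] - display_depth) < tolerance else "scene"
```

For example, two eyes 6 cm apart whose rays converge 0.8 m ahead would be classified as fixating the display plane, while parallel rays (gaze at distant traffic) fall through to the scene.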
- North America > United States > Texas (0.04)
- Asia > Middle East > Jordan (0.04)
- South America > Colombia > Meta Department > Villavicencio (0.04)
- North America > United States > Michigan > Wayne County > Dearborn (0.04)
- Information Technology > Human Computer Interaction > Interfaces (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
The Morning After: CES 2024 kicks off with transparent displays from Samsung and LG
I am contractually obliged to write that in at least one of our posts at CES 2024. This year, LG and Samsung brought out the big guns, both revealing similar (but technically very different) transparent displays for assembled media and analysts to gaze at and wonder… why. LG, first of all, revealed a wireless transparent OLED. The 77-inch OLED T also taps into the company's work in wireless transmission technology, reducing wiring needs to power alone. To ensure the display still offers black-enough blacks, a contrast screen rolls down into a box at the base of the OLED T. A few hours later, Samsung revealed its own transparent display, this one built on MicroLED.
- North America > United States > Nevada > Clark County > Las Vegas (0.06)
- Europe > Switzerland (0.06)
- Europe > Germany (0.06)
- Information Technology > Communications (0.52)
- Information Technology > Artificial Intelligence (0.37)
- Information Technology > Hardware (0.32)
Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations
Han, Yi, Chan, Matthew, Wengrowski, Eric, Li, Zhuohuan, Tippenhauer, Nils Ole, Srivastava, Mani, Zonouz, Saman, Garcia, Luis
Camera-based autonomous systems that emulate human perception are increasingly being integrated into safety-critical platforms. Consequently, an established body of literature has emerged that explores adversarial attacks targeting the underlying machine learning models. Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems. However, the real world poses challenges related to the "survivability" of adversarial manipulations given environmental noise in perception pipelines and the dynamic nature of autonomous systems. In this paper, we take a sensor-first approach. We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples. EvilEye exploits the camera's optics to induce misclassifications under a variety of illumination conditions. To generate dynamic perturbations, we formalize the projection of a digital attack into the physical domain by modeling the transformation function of the captured image through the optical pipeline. Our extensive experiments show that EvilEye's generated adversarial perturbations are much more robust across varying environmental light conditions relative to existing physical perturbation frameworks, achieving a high attack success rate (ASR) while bypassing state-of-the-art physical adversarial detection frameworks. We demonstrate that the dynamic nature of EvilEye enables attackers to adapt adversarial examples across a variety of objects with a significantly higher ASR compared to state-of-the-art physical world attack frameworks. Finally, we discuss mitigation strategies against the EvilEye attack.
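The abstract's central idea, optimizing a perturbation *through* a model of the optical pipeline rather than directly on the digital image, can be illustrated with a toy sketch. Everything below is an invented stand-in for illustration: a linear "classifier", an affine transmittance/ambient capture model, and projected-gradient-style sign updates; EvilEye's actual transformation function and attack objective are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": score = w @ x; positive score => class A.
w = rng.normal(size=64)
x_scene = rng.normal(size=64)    # benign captured image (flattened)

# Hypothetical optical model: the display perturbation `delta` is
# attenuated (transmittance alpha) and offset (ambient light beta)
# before the sensor composites it additively with the scene.
alpha, beta = 0.6, 0.1           # assumed, not measured, parameters

def capture(x, delta):
    return np.clip(x + alpha * delta + beta, -3.0, 3.0)

def pgd_attack(x, steps=50, lr=0.1, eps=0.5):
    """Sign-gradient descent on the perturbation *through* the optical
    model, so the attack accounts for the physical pipeline."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        captured = capture(x, delta)
        # d(score)/d(delta) = alpha * w wherever the clip is inactive
        active = (captured > -3.0) & (captured < 3.0)
        grad = alpha * w * active
        delta -= lr * np.sign(grad)        # drive the score downward
        delta = np.clip(delta, -eps, eps)  # display brightness budget
    return delta

delta = pgd_attack(x_scene)
```

Because the gradient is taken through `capture`, the resulting `delta` already compensates for attenuation and clipping, which is the toy analogue of a perturbation "surviving" the physical world.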
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Utah (0.04)
- Asia > Nepal (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)